
    Stable multispeed lattice Boltzmann methods

    We demonstrate how to produce a stable multispeed lattice Boltzmann method (LBM) for a wide range of velocity sets, many of which were previously thought to be intrinsically unstable. We use non-Gauss--Hermite cubatures. The method operates stably at almost zero viscosity, has second-order accuracy, suppresses the typical spurious oscillations (only a modest Gibbs effect is present), and introduces no artificial viscosity. The innovation carries almost no computational cost. DISCLAIMER: Additional tests and wider discussion of this preprint show that the claimed properties of the coupled steps, namely no artificial dissipation and second-order accuracy of the method, hold only on sufficiently fine grids. On coarse grids, the higher-order terms destroy the coupling of the steps and additional dissipation appears. The equations themselves remain correct.
    Comment: Disclaimer about the area of applicability added to abstract
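    To fix ideas, the following is a minimal sketch of one collide-and-stream step of a BGK lattice Boltzmann method on the standard single-speed D1Q3 velocity set. The velocities, weights and relaxation time are textbook Gauss--Hermite values; they stand in for, and are not, the paper's multispeed non-Gauss--Hermite cubature construction, and the function names are illustrative.

    ```python
    import numpy as np

    # Minimal D1Q3 BGK lattice Boltzmann sketch (illustrative; the paper's
    # multispeed, non-Gauss--Hermite cubature construction would replace
    # the velocity set and weights below).
    c = np.array([0, 1, -1])         # lattice velocities
    w = np.array([2/3, 1/6, 1/6])    # Gauss--Hermite weights
    cs2 = 1/3                        # lattice speed of sound squared
    tau = 0.51                       # relaxation time; nu = cs2*(tau - 1/2)

    def equilibrium(rho, u):
        """Second-order polynomial equilibrium populations, shape (3, N)."""
        cu = np.outer(c, u)
        return w[:, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - u**2/(2*cs2))

    def step(f):
        """One BGK collide-and-stream update on a periodic 1-D lattice."""
        rho = f.sum(axis=0)
        u = (c[:, None] * f).sum(axis=0) / rho
        f = f - (f - equilibrium(rho, u)) / tau   # BGK collision
        for i, ci in enumerate(c):                # streaming
            f[i] = np.roll(f[i], ci)
        return f

    # usage: a uniform fluid on 100 nodes stays uniform under the dynamics
    f = equilibrium(np.ones(100), np.zeros(100))
    for _ in range(10):
        f = step(f)
    ```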

    Error estimates for interpolation of rough data using the scattered shifts of a radial basis function

    The error between appropriately smooth functions and their radial basis function interpolants, as the interpolation points fill out a bounded domain in R^d, has been well studied. In each case, the analysis takes place in a natural function space dictated by the choice of radial basis function: the native space. The native space contains functions possessing a certain amount of smoothness. This paper establishes error estimates when the function being interpolated is conspicuously rough.
    Comment: 12 pages
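    For context, here is a minimal sketch of scattered-data RBF interpolation as studied in work of this kind. The Gaussian kernel, the shape parameter eps and the helper names are assumptions for illustration, not taken from the paper; the choice of kernel is what determines the native space mentioned above.

    ```python
    import numpy as np

    # Scattered-data RBF interpolation sketch. The Gaussian is an
    # illustrative kernel choice; each radial basis function comes with
    # its own native space of admissibly smooth functions.
    def rbf_interpolant(X, y, eps=3.0):
        """Return s with s(X[i]) = y[i], for points X (n, d) and data y (n,)."""
        phi = lambda r: np.exp(-(eps * r)**2)
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        coef = np.linalg.solve(phi(r), y)   # enforce interpolation conditions
        def s(x):
            return phi(np.linalg.norm(x[None, :] - X, axis=-1)) @ coef
        return s

    # usage: interpolating a merely Lipschitz ("rough") function on [0,1]^2
    rng = np.random.default_rng(0)
    X = rng.random((50, 2))
    s = rbf_interpolant(X, np.abs(X[:, 0] - X[:, 1]))
    print(s(np.array([0.3, 0.7])))
    ```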

    Enhancing SPH using moving least-squares and radial basis functions

    In this paper we consider two sources of enhancement for the meshfree Lagrangian particle method smoothed particle hydrodynamics (SPH) by improving the accuracy of the particle approximation. Namely, we will consider shape functions constructed using: moving least-squares approximation (MLS); radial basis functions (RBF). Using MLS approximation is appealing because polynomial consistency of the particle approximation can be enforced. RBFs further appeal as they allow one to dispense with the smoothing length, the parameter in the SPH method which governs the number of particles within the support of the shape function. Currently, only ad hoc methods for choosing the smoothing length exist. We ensure that any enhancement retains the conservative and meshfree nature of SPH. In doing so, we derive a new set of variationally consistent hydrodynamic equations. Finally, we demonstrate the performance of the new equations on the Sod shock tube problem.
    Comment: 10 pages, 3 figures, in Proc. A4A5, Chester UK, Jul. 18-22 200
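    For contrast with the proposed shape functions, here is a minimal sketch of the classical SPH summation interpolant in one dimension, which makes explicit the smoothing length h that the RBF enhancement would dispense with. The cubic-spline kernel and the names are standard textbook choices, not necessarily the paper's.

    ```python
    import numpy as np

    # Classical SPH summation density in 1-D; the smoothing length h sets
    # the kernel support and hence how many neighbours each particle sees.
    def W(r, h):
        """1-D cubic spline kernel with support radius 2h."""
        q = np.abs(r) / h
        w = np.where(q < 1, 1 - 1.5*q**2 + 0.75*q**3,
            np.where(q < 2, 0.25*(2 - q)**3, 0.0))
        return (2 / (3*h)) * w

    def sph_density(x, m, h):
        """Summation density rho_a = sum_b m_b W(x_a - x_b, h)."""
        return (m[None, :] * W(x[:, None] - x[None, :], h)).sum(axis=1)

    # usage: equal-mass particles on a uniform lattice recover unit density
    x = np.linspace(0, 1, 101)
    m = np.full_like(x, x[1] - x[0])   # mass = spacing gives density ~ 1
    print(sph_density(x, m, h=1.5*(x[1] - x[0]))[50])   # ~1.0 in the interior
    ```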

    Stabilisation of the lattice-Boltzmann method using the Ehrenfests' coarse-graining

    The lattice-Boltzmann method (LBM) and its variants have emerged as promising, computationally efficient and increasingly popular numerical methods for modelling complex fluid flow. However, it is acknowledged that the method can demonstrate numerical instabilities, e.g., in the vicinity of shocks. We propose a simple and novel technique to stabilise the lattice-Boltzmann method by monitoring the difference between microscopic and macroscopic entropy. Populations are returned to their equilibrium states if a threshold value is exceeded. We coin the name Ehrenfests' steps for this procedure, in homage to the vehicle that we use to introduce it, namely the Ehrenfests' idea of coarse-graining. The one-dimensional shock tube for a compressible isothermal fluid is a standard benchmark test for hydrodynamic codes. We observe that, of all the LBMs considered in the numerical experiment with the one-dimensional shock tube, only the method which includes Ehrenfests' steps is capable of suppressing spurious post-shock oscillations.
    Comment: 4 pages, 9 figures
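    A minimal sketch of the stabilisation step as the abstract describes it, under assumptions the abstract leaves open: the discrete Boltzmann entropy supplies the microscopic/macroscopic comparison, `equilibrium` is a user-supplied callable returning per-node equilibrium populations, and the threshold value is illustrative.

    ```python
    import numpy as np

    # Ehrenfests' step sketch: nodes whose nonequilibrium entropy exceeds
    # a threshold have their populations returned to equilibrium. The
    # entropy form, `equilibrium` and `threshold` are assumptions.
    def ehrenfests_step(f, w, equilibrium, threshold=1e-3):
        """f: (Q, N) populations; w: (Q,) lattice weights."""
        feq = equilibrium(f)                    # per-node equilibria
        S = lambda g: -(g * np.log(g / w[:, None])).sum(axis=0)
        delta_S = S(feq) - S(f)                 # nonequilibrium entropy >= 0
        offenders = delta_S > threshold
        f[:, offenders] = feq[:, offenders]     # coarse-grain those nodes
        return f
    ```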

    Extending the range of error estimates for radial approximation in Euclidean space and on spheres

    We adapt Schaback's error doubling trick [R. Schaback. Improved error bounds for scattered data interpolation by radial basis functions. Math. Comp., 68(225):201--216, 1999.] to give error estimates for radial interpolation of functions with smoothness lying (in some sense) between that of the usual native space and the subspace with double the smoothness. We do this for both bounded subsets of R^d and spheres. As a step on the way to our ultimate goal we also show convergence of pseudoderivatives of the interpolation error.
    Comment: 10 pages
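    Schematically, and with illustrative notation (the precise norms, exponents and spaces are the subject of the paper, not fixed here), the estimates involved take the following shape.

    ```latex
    % Native-space bound at fill distance h (the rate k depends on the RBF):
    \| f - s_f \|_{L_\infty(\Omega)} \le C\, h^{k} \| f \|_{\mathcal{N}_\phi(\Omega)},
    \qquad f \in \mathcal{N}_\phi(\Omega).
    % Schaback's doubling trick, on the subspace with double the smoothness:
    \| f - s_f \|_{L_\infty(\Omega)} \le C\, h^{2k} \| f \|_{\mathcal{N}_\phi^{2}(\Omega)}.
    % The paper addresses the intermediate scale: rates h^{k+\sigma},
    % 0 < \sigma < k, for smoothness between the two spaces.
    ```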

    Investigating benchmark correlations when comparing algorithms with parameter tuning: detailed experiments and results.

    Benchmarks are important to demonstrate the utility of optimisation algorithms, but there is controversy about the practice of benchmarking: we could select instances that present our algorithm favourably, and dismiss those on which our algorithm underperforms. Several papers highlight the pitfalls of benchmarking, some of which concern the automated design of algorithms, where a set of problem instances (benchmarks) is used to train the algorithm. As with machine learning, if the training set does not reflect the test set, the algorithm will not generalise. This raises open questions concerning the use of test instances to automatically design algorithms. We use differential evolution and sweep the parameter settings to investigate the practice of benchmarking using the BBOB benchmarks. We make three key findings. Firstly, several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general when it performs poorly on a few key instances, possibly introducing unwanted bias into a resulting automatically designed algorithm. Secondly, the number of evaluations can have a large effect on the conclusion. Finally, a systematic sweep of the parameters shows how performance varies with time across the space of algorithm configurations. The datasets, including all computed features, the evolved policies and their performances, and the visualisations for all feature sets, are available from the University of Stirling Data Repository (http://hdl.handle.net/11667/109).
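    A minimal sketch of the kind of correlation analysis described, under stated assumptions: the array shape, the performance measure and the 0.95 cut-off are placeholders rather than the paper's protocol; only the 24-function size of the BBOB noiseless suite is taken as given.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Rows: algorithm configurations from the parameter sweep.
    # Columns: the 24 BBOB functions. Entries: a performance measure
    # recorded at a fixed evaluation budget (placeholder data here).
    perf = np.random.default_rng(1).random((200, 24))

    rho, _ = spearmanr(perf)   # (24, 24) rank correlations between benchmarks
    pairs = [(i, j) for i in range(24) for j in range(i + 1, 24)
             if rho[i, j] > 0.95]
    print(f"{len(pairs)} highly correlated benchmark pairs")
    ```

    Repeating the analysis at several evaluation budgets would expose the second finding above: the correlations, and hence any conclusions drawn from the suite, shift with the budget.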

    Investigating benchmark correlations when comparing algorithms with parameter tuning.

    Benchmarks are important for comparing the performance of optimisation algorithms, but we can select instances that present our algorithm favourably and dismiss those on which it under-performs. Also relevant is the automated design of algorithms, which uses problem instances (benchmarks) to train an algorithm: careful choice of instances is needed for the algorithm to generalise. We sweep the parameter settings of differential evolution applied to the BBOB benchmarks. Several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general when it performs poorly on a few key instances. These correlations vary with the number of evaluations.